
    A Grasping-centered Analysis for Cloth Manipulation

    Compliant and soft hands have gained a lot of attention in the past decade because of their ability to adapt to the shape of objects, which increases their effectiveness for grasping. However, when it comes to grasping highly flexible objects such as textiles, we face the dual problem: it is the object that adapts to the shape of the hand or gripper. In this context, classic grasp analysis and grasping taxonomies are not suitable for describing grasps of textile objects. This work proposes a novel definition of textile object grasps that abstracts from the robotic embodiment or hand shape and recovers concepts from the early neuroscience literature on hand prehension skills. This framework enables us to identify which grasps have been used in the literature so far to perform robotic cloth manipulation, and allows for a precise definition of all the tasks that have been tackled in terms of manipulation primitives based on regrasps. In addition, we also review which grippers have been used. Our analysis shows that the vast majority of cloth manipulations have relied on only one type of grasp, and at the same time we identify several tasks that need a greater variety of grasp types to be executed successfully. Our framework is generic, provides a classification of cloth manipulation primitives, and can inspire gripper design and benchmark construction for cloth manipulation.
    Comment: 13 pages, 4 figures, 4 tables. Accepted for publication in IEEE Transactions on Robotics.

    A Comparison of Three Methods for Measure of Time to Contact

    Time to Contact (TTC) is a biologically inspired method for obstacle detection and reactive motion control that requires neither scene reconstruction nor 3D depth estimation. Estimating TTC is difficult because it requires a stable and reliable estimate of the rate of change of distance between image features. In this paper we propose a new method to measure time to contact, Active Contour Affine Scale (ACAS). We experimentally and analytically compare ACAS with two other recently proposed methods: Scale Invariant Ridge Segments (SIRS) and Image Brightness Derivatives (IBD). Our results show that ACAS provides a more accurate estimation of TTC when the image flow can be approximated by an affine transformation, while SIRS provides an estimate that is generally valid but not always as accurate as ACAS, and IBD systematically over-estimates time to contact.
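
    The quantity behind scale-based TTC estimation is the ratio between an image-plane scale measure and its rate of change, TTC ≈ s / (ds/dt), which is what makes depth reconstruction unnecessary. The sketch below illustrates only that general relationship under the assumption of a hypothetical sequence of scale measurements sampled at a fixed frame interval; it is not the ACAS, SIRS or IBD implementation from the paper.

    # Minimal sketch, assuming `scales` is a hypothetical sequence of object
    # scale measurements (e.g., square roots of an active-contour area) taken
    # every `dt` seconds. Not the paper's code.
    def estimate_ttc(scales, dt):
        """Estimate time to contact (seconds) from consecutive scale measurements."""
        ttcs = []
        for s_prev, s_curr in zip(scales, scales[1:]):
            ds_dt = (s_curr - s_prev) / dt      # rate of scale change
            if ds_dt <= 0:                      # receding or static: no contact
                ttcs.append(float("inf"))
            else:
                ttcs.append(s_curr / ds_dt)     # TTC ≈ s / (ds/dt)
        return ttcs

    # Example: an object whose apparent scale doubles every second, observed at 30 fps
    print(estimate_ttc([1.000, 1.023, 1.047], dt=1 / 30.0))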

    'Elbows Out' - Predictive tracking of partially occluded pose for Robot-Assisted dressing

    Robots that can assist in activities of daily living, such as dressing, may support older adults, addressing the needs of an aging population in the face of a growing shortage of care professionals. Using depth cameras during robot-assisted dressing can lead to occlusions and loss of user tracking, which may result in unsafe trajectory planning or prevent the planning task from proceeding altogether. For the dressing task of putting on a jacket, which is addressed in this letter, tracking of the arm is lost when the user's hand enters the jacket, which may lead to unsafe situations for the user and a poor interaction experience. Using occlusion-free motion tracking data gathered from a human-human interaction study on an assisted dressing task, recurrent neural network models were built to predict the elbow position of a single arm from other features of the user's pose. The best features for predicting the elbow position were explored using regression trees, which indicated the hips and shoulder as possible predictors. Engineered features were also created based on observations of real dressing scenarios and their effectiveness explored. A comparison between position-based and orientation-based datasets was also included in this study. A 12-fold cross-validation was performed for each feature set and repeated 20 times to improve statistical power. Using position-based data, the elbow position could be predicted with a 4.1 cm error; adding engineered features reduced the error to 2.4 cm. Adding orientation information to the data did not improve the accuracy, and aggregating univariate response models failed to make significant improvements. The model was evaluated on Kinect data for a robot dressing task and, although not without issues, demonstrates potential for this application. Although this has been demonstrated for jacket dressing, the technique could be applied to a number of different situations involving occluded tracking.
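
    As a rough illustration of the kind of recurrent regressor described above (not the authors' published architecture), the sketch below maps a short window of pose features, such as hip and shoulder positions, to the occluded elbow position. The feature dimension, window length, hidden size and training data are placeholder assumptions.

    # Hedged sketch of an LSTM-based elbow-position regressor; all shapes are illustrative.
    import torch
    import torch.nn as nn

    class ElbowPredictor(nn.Module):
        def __init__(self, n_features=9, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 3)     # predict elbow (x, y, z)

        def forward(self, x):                    # x: (batch, time, n_features)
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])      # regress from the last time step

    model = ElbowPredictor()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    poses = torch.randn(128, 30, 9)              # placeholder pose-feature windows
    elbows = torch.randn(128, 3)                 # placeholder elbow positions
    for _ in range(10):                          # minimal training loop
        optimiser.zero_grad()
        loss = loss_fn(model(poses), elbows)
        loss.backward()
        optimiser.step()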

    Personalized robot assistant for support in dressing

    Robot-assisted dressing is performed in close physical interaction with users who may have a wide range of physical characteristics and abilities. Yet designs of user-adaptive and personalized robots in this context still show limited or no consideration of specific user-related issues. This paper describes the development of a multi-modal robotic system for a specific dressing scenario, putting on a shoe, where users' personalized inputs contribute to a much improved task success rate. We have developed: 1) user tracking, gesture recognition and posture recognition algorithms relying on images provided by a depth camera; 2) a shoe recognition algorithm based on RGB and depth images; 3) speech recognition and text-to-speech algorithms that allow verbal interaction between the robot and user. The interaction is further enhanced by calibrated recognition of the users' pointing gestures and a correspondingly adjusted shoe delivery position. A series of shoe fitting experiments was performed on two groups of users, with and without previous robot personalization, to assess how personalization affects the interaction performance. Our results show that the shoe fitting task with the personalized robot is completed in a shorter time, with a smaller number of user commands and a reduced workload.
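
    The paper does not publish its gesture-to-delivery-position code; as an illustrative geometric sketch only, a pointed location can be estimated by intersecting the elbow-to-hand ray with a support plane and then used as the delivery target. The variable names, the choice of a horizontal plane and the joint pair used are assumptions, not the authors' method.

    # Illustrative ray-plane intersection for turning a pointing gesture into a target point.
    import numpy as np

    def pointed_target(elbow, hand, plane_z=0.0):
        """Intersect the elbow->hand ray with the horizontal plane z = plane_z."""
        elbow, hand = np.asarray(elbow, float), np.asarray(hand, float)
        direction = hand - elbow
        if abs(direction[2]) < 1e-6:             # ray parallel to the plane
            return None
        t = (plane_z - hand[2]) / direction[2]
        if t < 0:                                # pointing away from the plane
            return None
        return hand + t * direction

    # Example: a user pointing down and forward; deliver the shoe at the indicated spot
    print(pointed_target(elbow=[0.0, 0.0, 1.2], hand=[0.2, 0.0, 1.0]))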

    A review and comparison of ontology-based approaches to robot autonomy

    Within the next decades, robots will need to execute a large variety of tasks autonomously in a large variety of environments. To reduce the resulting programming effort, a knowledge-enabled approach to robot programming can be adopted that organizes information into re-usable knowledge pieces. However, for ease of reuse, there needs to be an agreement on the meaning of terms. A common approach is to represent these terms using ontology languages that conceptualize the respective domain. In this work, we review projects that use ontologies to support robot autonomy. We systematically search for projects that fulfill a set of inclusion criteria and compare them with respect to the scope of their ontology, the types of cognitive capabilities supported by the use of ontologies, and their application domain.

    Plant leaf imaging using time of flight camera under sunlight, shadow and room conditions

    In this article, we analyze the effects of ambient light on Time of Flight (ToF) depth imaging of a plant's leaf under sunlight, shadow and room conditions. ToF imaging is sensitive to ambient light, so we seek the best possible integration times (IT) for each condition, which is important for optimizing camera calibration. Our analysis is based on several statistical metrics estimated from the ToF data. We explain the estimation of these metrics and propose a method for predicting the deteriorating behavior of the data in each condition using camera flags. Finally, we also propose a method to improve the quality of a ToF image taken in a mixed condition with different ambient light exposures.
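
    The sketch below shows the kind of per-pixel temporal statistics such an analysis typically relies on; the article's exact metrics and flag handling are not reproduced. It assumes a hypothetical stack of depth frames captured at one integration time, with flagged (invalid) pixels marked as NaN.

    # Per-pixel temporal statistics over a (n_frames, height, width) ToF depth stack.
    import numpy as np

    def depth_statistics(frames):
        """Return per-pixel temporal mean/std and the overall fraction of flagged samples."""
        mean_depth = np.nanmean(frames, axis=0)      # average depth per pixel
        std_depth = np.nanstd(frames, axis=0)        # temporal noise per pixel
        flagged = np.mean(np.isnan(frames))          # share of invalid (flagged) samples
        return mean_depth, std_depth, flagged

    def median_noise(frames, leaf_mask):
        """Compare integration times by the median temporal noise over the leaf region."""
        _, std_depth, _ = depth_statistics(frames)
        return np.nanmedian(std_depth[leaf_mask])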

    Continuous and discrete-time sliding mode control design techniques

    SIGLE. Available from British Library Document Supply Centre, DSC:DXN022434 / BLDSC - British Library Document Supply Centre, GB, United Kingdom.

    Historical development of the SWATH ship concept

    SIGLE. Available from British Library Document Supply Centre, DSC:9110.2555(NAOE--87-44) / BLDSC - British Library Document Supply Centre, GB, United Kingdom.

    Robotized Plant Probing: Leaf Segmentation Utilizing Time-of-Flight Data


    Learning Relational Dynamics of Stochastic Domains for Planning

    Probabilistic planners are very flexible tools that can provide good solutions for difficult tasks. However, they rely on a model of the domain, which may be costly to either hand-code or automatically learn for complex tasks. We propose a new learning approach that (a) requires only a set of state transitions to learn the model; (b) can cope with uncertainty in the effects; (c) uses a relational representation to generalize over different objects; and (d) can learn, in addition to action effects, exogenous effects that are not related to any action, e.g., moving objects, endogenous growth and natural development. The proposed learning approach combines a multi-valued variant of inductive logic programming for the generation of candidate models with an optimization method to select the best set of planning operators to model a problem. Finally, experimental validation is provided that shows improvements over previous work.
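
    As a minimal sketch of the selection step named above (not the paper's algorithm), candidate operators can be scored by how well they explain the observed transitions, penalized by model size, and chosen greedily. The candidate-generation step via inductive logic programming is not shown, and the `likelihood` interface of the hypothetical operator objects is an assumption.

    # Greedy selection of planning operators by penalized log-likelihood of observed transitions.
    import math

    def score(selected, transitions, penalty=1.0):
        """Log-likelihood of the transitions under the selected operators, minus a size penalty."""
        total = 0.0
        for t in transitions:
            # probability the best-matching selected operator assigns to transition t
            p = max((op.likelihood(t) for op in selected), default=1e-9)
            total += math.log(max(p, 1e-9))
        return total - penalty * len(selected)

    def greedy_select(candidates, transitions, penalty=1.0):
        """Add operators one at a time while doing so improves the score."""
        selected = []
        best = score(selected, transitions, penalty)
        improved = True
        while improved:
            improved = False
            for op in candidates:
                if op in selected:
                    continue
                trial = score(selected + [op], transitions, penalty)
                if trial > best:
                    best, selected, improved = trial, selected + [op], True
        return selected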